Updated 2020 by JCB.
Original by John Baez.
While an FAQ mostly covers frequently asked questions whose answers are known, in physics there are also plenty of simple and interesting questions whose answers are not known. Here we list some of these. We could have called this section Frequently Unanswered Questions, but the resulting acronym would have been rather rude.
Before you set about answering these questions on your own, it's worth noting that while nobody knows what the answers are, a great deal of work has already been done on most of these subjects. So, do plenty of research and ask around before you try to cook up a theory that'll answer one of these and win you the Nobel prize! You'll probably need to really know physics inside and out before you make any progress on these.
The following partial list of open questions is divided into five groups. But given the implications of particle physics and nonlinear dynamics for cosmology, and other connections between the groups, the classification is somewhat arbitrary.
There are many other interesting open questions in physics. Their omission is not a judgement about their importance: this article is incomplete and needs to be improved.
Since this article was last updated in 2012, a lot of progress has been made in answering some open questions in physics. Two seem to have been completely settled: "does the Higgs boson exist?" and "do gravitational waves exist?". The answer seems to be a clear "yes" in both cases. So, we have removed these questions from the list. Many of the other questions need to be updated; this will take a lot of work.
For more details, try this:
- Sonoluminescence, Wikipedia.
To learn more about superconductivity, see this web page and its many links:
- Superconductivity, Wikipedia.
This is more of a question of mathematical physics than physics per se—but it's related to the previous question, since (one might argue) how can we deeply understand turbulence if we don't even know that the equations for fluid motion have solutions? At the turn of the millennium, the Clay Mathematics Institute offered a $1,000,000 prize for solving this problem. For details, see:
- Clay Mathematics Institute, Navier–Stokes Equation.
Many physicists think these issues are settled, at least for most practical purposes. But some still think the last word has not been heard. Asking about this topic in a roomful of physicists is the best way to start an argument, unless they all say "Oh no, not that again!". There are many books to read on this subject, but most of them disagree.
This question is to some extent impacted by the previous one, but it also has a strong engineering aspect to it. Some physicists think quantum computers are impossible in principle; more think they are possible in principle, but are still unsure if they will ever be practical.
Here are some ways to learn more about quantum computation:
- Centre for Quantum Computation home page.
- John Preskill, course notes on quantum computation.
- Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge U. Press, Cambridge, 2000. Errata, table of contents and Chapter 1 available here.
We still don't know, but in 2003 some important work was done on this issue:
- Neil J. Cornish, David N. Spergel, Glenn D. Starkman and Eiichiro Komatsu, Constraining the Topology of the Universe.
Briefly, the Wilkinson Microwave Anisotropy Probe (WMAP) was used to rule out nontrivial topology within a distance of 78,000 million light years—at least for a large class of models. For the precise details, you'll have to read the article!
Here are two pieces of required reading for anyone interested in this tough question:
- Huw Price, Time's Arrow and Archimedes' Point: New Directions for a Physics of Time, Oxford University Press, Oxford, 1996.
- H. D. Zeh, The Physical Basis of the Direction of Time, second edition, Springer Verlag, Berlin, 1992.
There's been some progress on this one recently. Starting in the late 1990s, a bunch of evidence has accumulated suggesting that the universe is not slowing down enough to recollapse in a so-called "big crunch". In fact, it seems that some form of "dark energy" is making the expansion speed up! We know very little about dark energy; it's really just a name for any invisible stuff with enough negative pressure, compared to its energy density, to make the expansion of the universe accelerate rather than slow down. (In general relativity, energy density tends to make the expansion slow down, but negative pressure has the opposite effect.)

Einstein introduced dark energy to physics under the name of "the cosmological constant" when he was trying to explain how a static universe could fail to collapse. This constant simply said what the density of dark energy was supposed to be, without providing any explanation for its origin. When Hubble observed the redshift of light from distant galaxies, and people concluded the universe was expanding, the idea of a cosmological constant fell out of fashion and Einstein called it his "greatest blunder". But now that the expansion of the universe seems to be accelerating, a cosmological constant or some other form of dark energy seems plausible.
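To see why negative pressure has this effect, it helps to write down the standard acceleration equation for a homogeneous, isotropic universe in general relativity (quoted here for orientation, not derived):

```latex
\frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
```

Here a is the scale factor of the universe, ρ the mass density and p the pressure. The expansion accelerates exactly when ρ + 3p/c² < 0, that is, when p < −ρc²/3. A cosmological constant has p = −ρc², which easily meets this condition, while ordinary matter and radiation (p ≥ 0) always decelerate the expansion.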
For an examination of what an ever-accelerating expansion might mean for our universe, see:
- John Baez, The End of the Universe.
But, we still can't be sure the universe will expand forever, because the possibility remains that at some point the dark energy will go away, switch sign, or get bigger! Here's a respectable paper suggesting that the dark energy will change sign and make the universe recollapse in a big crunch:
- Renata Kallosh and Andrei Linde, Dark Energy and the Fate of the Universe.
But here's a respectable paper suggesting the exact opposite: that the dark energy will get bigger and tear apart the universe in a "big rip":
- Robert R. Caldwell, Marc Kamionkowski, and Nevin N. Weinberg, Phantom Energy and Cosmic Doomsday.
In short, the ultimate fate of the universe remains an open question!
But, before you launch into wild speculations, it's worth emphasizing that the late 1990s and early 2000s have seen a real revolution in experimental cosmology, which answered many open questions (for example: "how long ago was the Big Bang?") in shockingly precise ways (about 13,700 million years). For a good introduction to this material, try:
- Ned Wright, Cosmology Tutorial and Frequently Asked Questions in Cosmology.
Our evidence concerning the expansion of the universe, dark energy, and dark matter now comes from a wide variety of sources, and what makes us confident we're on the right track is how nicely all this data agrees. People are getting this data from various sources including:
- Distant Supernovae. See especially these two experimental groups:
  - The High-z Supernova Search Team. See also their big paper.
  - The Supernova Cosmology Project. See also their big paper.
- The Cosmic Microwave Background (CMB). There have been lots of great experiments measuring little ripples in the background radiation left over from the Big Bang, most notably the Wilkinson Microwave Anisotropy Probe (WMAP) discussed below.
- Large-Scale Structure. Detailed studies of galactic clustering and how it changes with time give us lots of information about dark matter. The 800-pound gorilla in this field is the Sloan Digital Sky Survey.
As mentioned above, evidence has been coming in that suggests the universe is full of some sort of "dark energy" with negative pressure. For example, an analysis of data from the Wilkinson Microwave Anisotropy Probe in 2003 suggested that 73% of the energy density of the universe is in this form! But even if this is right and dark energy exists, we're still in the dark about what it is.
The simplest model is a cosmological constant, meaning that so-called "empty" space actually has a negative pressure and positive energy density, with the pressure exactly equal to minus the energy density in units where the speed of light is 1. But nobody has had much luck explaining why empty space should be like this, especially with an energy density as small as what we seem to be observing: about 6 × 10⁻³⁰ grams per cubic centimeter if we use Einstein's E = mc² to convert it into a mass density. Other widely studied possibilities for dark energy include various forms of "quintessence". But, this term means little more than "some mysterious field with negative pressure", and there's little understanding of why such a field should exist.
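As a rough numerical cross-check of that figure, here is a sketch in Python. The Hubble constant and dark-energy fraction used below are round assumed values (roughly 70 km/s/Mpc and 73%), not numbers taken from the references:

```python
import math

# Rough cross-check of the quoted dark-energy mass density.
G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
Mpc = 3.086e22       # metres in a megaparsec
H0 = 70e3 / Mpc      # assumed Hubble constant, ~70 km/s/Mpc, converted to 1/s

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density of the universe, kg/m^3
rho_dark = 0.73 * rho_crit                 # assumed 73% of it is dark energy

print(rho_crit)          # ~9e-27 kg/m^3
print(rho_dark * 1e-3)   # converted to g/cm^3: ~7e-30, close to the 6e-30 quoted above
```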
For more details, try these:
- Ned Wright, Vacuum Energy Density, or: How Can Nothing Weigh Something?
- John Baez, What's the Energy Density of the Vacuum?
- Sean Carroll, The Cosmological Constant.
The third is the most detailed, and it has lots of good references for further study.
Since the late 1990s, a consensus has emerged that some sort of "cold dark matter" is needed to explain all sorts of things we see. For example, in 2003 an analysis of data from the Wilkinson Microwave Anisotropy Probe suggested that the energy density of the universe consists of about 23% cold dark matter, as compared to only 4% ordinary matter. (The rest is dark energy.)
Unfortunately nobody knows what this cold dark matter is! It probably can't be ordinary matter we've neglected, or neutrinos, since these wouldn't have been sufficiently "cold" in the early universe to collapse into the lumps needed for galaxy formation. There are many theories about what it might be. An ever-intensifying search for direct evidence of dark matter particles has so far been unsuccessful. We could be very confused about something, like our theory of gravity. As the article by Bertone and Tait says:
There is a growing sense of "crisis" in the dark-matter particle community, which arises from the absence of evidence for the most popular candidates for dark-matter particles—such as weakly interacting massive particles, axions and sterile neutrinos—despite the enormous effort that has gone into searching for these particles.
For details, try these:
- Gianfranco Bertone and Tim M. P. Tait, A New Era in the Search for Dark Matter.
- Gianfranco Bertone and Dan Hooper, A History of Dark Matter.
- Timothy J. Sumner, Experimental Searches for Dark Matter.
The last of these has lots of references, but only up to 2002.
In 2003 the case for inflation was bolstered by the Wilkinson Microwave Anisotropy Probe, which made detailed measurements of "anisotropies" (slight deviations from perfect evenness) in the cosmic microwave background radiation. The resulting "cosmic microwave background power spectrum" shows peaks and troughs whose precise features should be sensitive to many details of the very early history of the Universe. Models that include inflation seem to fit this data very well, while those that don't, don't.
Even so, the mechanism behind inflation remains somewhat mysterious. Inflation can be nicely explained using quantum field theory by positing the existence of a special particle called the "inflaton", which gave rise to extremely high negative pressure before it decayed into other particles. This may sound wacky, but it's really not. The only problem is that nobody has any idea how this particle fits into known physics. For example, it's not part of the Standard Model.
For details, try:
- Charles H. Lineweaver, Inflation and the Cosmic Microwave Background.
Also available on the astrophysics arXiv.
As of 2004 this was quite a hot topic in astrophysics. See for example:
- Volker Bromm and Richard B. Larson, The First Stars.
Gamma ray bursters (GRBs) appear as bursts of gamma rays coming from points randomly scattered in the sky. These bursts are very brief, lasting from a few milliseconds to a few hundred seconds. For a long time there were hundreds of theories about what caused them, but very little evidence for any of these theories, since nothing was ever seen at the location where one of these bursts occurred. Their random distribution eventually made a convincing case that they occurred not within our solar system or within our galaxy, but much farther away. Given this, it was clear that they must be extraordinarily powerful.
Starting in the late 1990s, astronomers made a concerted effort to catch gamma ray bursters in the act, focusing powerful telescopes to observe them in the visible and ultraviolet spectrum moments after a burst was detected. These efforts paid off in 1999 when one was seen to emit visible light for as long as a day after the burst occurred. A redshift measurement of z = 1.6 indicated that the gamma ray burster was about 10,000 million light years away. If the burst of gamma rays was omnidirectional, this would mean that its power was about 10¹⁶ times that of our sun—for a very short time. For details on this discovery, see:
- Burst and Transient Source Experiment (BATSE), GOTCHA! The Big One That Didn't Get Away, Gamma Ray Burst Headlines, January 27, 1999.
A more detailed observation of a burst on March 29, 2003 convinced many astrophysicists that at least some gamma-ray bursters are so-called "hypernovae". A hypernova is an exceptionally large supernova formed by the nearly instantaneous collapse of the core of a very large star, at least 10 times the mass of the sun, which has already blown off most of its hydrogen. Such stars are called Wolf–Rayet stars. The collapse of such a star need not be spherically symmetric, so the gamma ray burst could be directional, reducing the total power needed to explain the brightness we see here (if the burst happened to point towards us). For more, try:
- European Southern Observatory (ESO), Cosmological Gamma-Ray Bursts and Hypernovae Conclusively Linked, Press Release, June 18, 2003.
It's hard to resist quoting the theory described here:
Here is the complete story about GRB 030329, as the astronomers now read it.
Thousands of years prior to this explosion, a very massive star, running out of hydrogen fuel, let loose much of its outer envelope, transforming itself into a bluish Wolf–Rayet star. The remains of the star contained about 10 solar masses worth of helium, oxygen and heavier elements.
In the years before the explosion, the Wolf–Rayet star rapidly depleted its remaining fuel. At some moment, this suddenly triggered the hypernova/gamma-ray burst event. The core collapsed, without the outer part of the star knowing. A black hole formed inside, surrounded by a disk of accreting matter. Within a few seconds, a jet of matter was launched away from that black hole.
The jet passed through the outer shell of the star and, in conjunction with vigorous winds of newly formed radioactive nickel-56 blowing off the disk inside, shattered the star. This shattering, the hypernova, shines brightly because of the presence of nickel. Meanwhile, the jet plowed into material in the vicinity of the star, and created the gamma-ray burst which was recorded some 2,650 million years later by the astronomers on Earth. The detailed mechanism for the production of gamma rays is still a matter of debate but it is either linked to interactions between the jet and matter previously ejected from the star, or to internal collisions inside the jet itself.
This scenario represents the "collapsar" model, introduced by American astronomer Stan Woosley (University of California, Santa Cruz) in 1993 and a member of the current team, and best explains the observations of GRB 030329.
"This does not mean that the gamma-ray burst mystery is now solved", says Woosley. "We are confident now that long bursts involve a core collapse and a hypernova, likely creating a black hole. We have convinced most skeptics. We cannot reach any conclusion yet, however, on what causes the short gamma-ray bursts, those under two seconds long."
Indeed, there seem to be at least two kinds of gamma-ray bursters, the "long" and "short" ones. Nobody has caught the short ones in time to see their afterglows, so they are more mysterious. For more information, try these:
- NASA, Gamma Ray Bursts.
- Edo Berger, Gamma-ray Burst FAQ.
- Wikipedia, Gamma Ray Bursters.
- Peter Mészáros, Gamma-ray Burst Physics.
At the time this was written, NASA was scheduled to launch a satellite called "Swift", specially devoted to gamma-ray burst detection, in September 2004.
Cosmic rays are high-energy particles, mainly protons and alpha particles, which come from outer space and hit Earth's atmosphere producing a shower of other particles. Most of these are believed to have picked up their energy by interacting with shock waves in the interstellar medium. But the highest-energy ones remain mysterious—nobody knows how they could have acquired such high energies.
The record is an event detected in 1991 by the Fly's Eye in Utah (and reported in 1994): a shower of particles produced by a cosmic ray of about 300 EeV. (An EeV is an "exa electron volt", which is the energy an electron picks up when it accelerates through an electrostatic potential increase of 10¹⁸ volts. 300 EeV is about 50 joules—the energy of a one-kilogram mass moving at 10 meters/second, presumably all packed into one particle!) A similar event has been detected by the Japanese scintillation array AGASA.
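Here is a quick unit check of those figures in Python (a sketch; the only physical input besides the quoted 300 EeV is the size of the electron charge):

```python
# Convert 300 EeV to joules and compare with a 1 kg mass moving at 10 m/s.
e = 1.602e-19                     # one electron volt in joules
E_cosmic_ray = 300e18 * e         # 300 EeV
E_kinetic = 0.5 * 1.0 * 10.0**2   # kinetic energy of 1 kg at 10 m/s

print(E_cosmic_ray)   # ~48 J
print(E_kinetic)      # 50 J -- indeed about the same
```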
Nobody knows how such high energies are attained—perhaps as a side effect of the shock made by a supernova or gamma-ray burster? The puzzle is especially acute because particles with energies like these are expected to interact with the cosmic microwave background radiation and lose energy after travelling only moderate extragalactic distances, say 100 million light years. This effect is called the Greisen–Zatsepin–Kuz'min (or GZK) cutoff. So, either our understanding of the GZK cutoff is mistaken, or ultra-high-energy cosmic rays come from relatively nearby—in cosmological terms, that is.
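For orientation, here is a back-of-envelope estimate of where the GZK cutoff comes from; the particle masses and the typical CMB photon energy below are standard round values assumed for the sketch, not numbers from this article:

```python
# A proton hitting a CMB photon head-on can produce a pion once the invariant
# mass exceeds m_p + m_pi.  Using s ~ m_p^2 + 4*E_p*eps, the threshold is
#   E_p = m_pi * (2*m_p + m_pi) / (4*eps).
m_p = 938e6    # proton rest energy, eV
m_pi = 135e6   # neutral pion rest energy, eV
eps = 6e-4     # typical CMB photon energy, eV (assumed round value)

E_threshold = m_pi * (2 * m_p + m_pi) / (4 * eps)
print(E_threshold)   # ~1e20 eV, i.e. roughly 100 EeV
```

Detailed calculations using the full spectrum of CMB photons put the effective cutoff somewhat lower, around 5 × 10¹⁹ eV, but the rough scale is already clear.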
Right now the data is confusing, because two major experiments on ultra-high-energy cosmic rays have yielded conflicting results. The Fly's Eye seems to see a sharp drop-off in the number of cosmic rays above 100 EeV, while the AGASA detector does not. People hope that the Pierre Auger cosmic ray observatory, being built in western Argentina, will settle the question.
For more information, try these:
- HiRes—the High Resolution Fly's Eye experiment.
- AGASA—the Akeno Giant Air Shower Array.
- Pierre Auger Observatory.
- A. A. Watson, Observations of Ultra-High Energy Cosmic Rays.
- D. J. Bird et al., Detection of a cosmic ray with measured energy well beyond the expected spectral cutoff due to cosmic microwave radiation.
The Pioneer 10 and Pioneer 11 spacecraft are leaving the Solar System. Pioneer 10 sent back radio information about its location until January 2003, when it was about 80 times farther from the Sun than Earth is. Pioneer 11 sent back signals until September 1995, when its distance from the Sun was about 45 times Earth's.
The Pioneer missions have yielded the most precise information we have about navigation in deep space. But analysis of their radio tracking data indicates a small unexplained acceleration towards the Sun! The magnitude of this acceleration is roughly 10⁻⁹ meters per second per second. This is known as the "Pioneer anomaly".
This anomaly has also been seen in the Ulysses spacecraft, and possibly also in the Galileo spacecraft, though the data is much more noisy, since these were Jupiter probes, hence much closer to the Sun, where there is a lot more pressure from solar radiation. The Viking mission to Mars did not detect the Pioneer anomaly—and it would have, had an acceleration of this magnitude been present, because its radio tracking was accurate to about 12 meters.
Many physicists and astronomers have tried to explain the Pioneer anomaly using conventional physics, but so far nobody seems to have succeeded. There are many proposals that try to explain the anomaly using new physics—in particular, modified theories of gravity. But there is no consensus that any of these explanations are right, either. For example, explaining the Pioneer anomaly using dark matter would require more than 0.0003 solar masses of dark matter within 50 astronomical units of the Sun (an astronomical unit is the distance between Sun and Earth). But, this is in conflict with our calculations of planetary orbits.
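To get a feeling for how small and yet how detectable the effect is, here is a sketch; the acceleration is the rough value quoted above, and the twenty-year tracking span is an assumed round number:

```python
# Rough scale of the Pioneer anomaly's cumulative effect.
a = 1e-9                      # anomalous sunward acceleration, m/s^2 (rough value quoted above)
t = 20 * 365.25 * 24 * 3600   # ~20 years of radio tracking, in seconds

delta_v = a * t               # accumulated velocity change
delta_x = 0.5 * a * t**2      # accumulated position offset

print(delta_v)         # ~0.6 m/s
print(delta_x / 1e3)   # ~200,000 km -- tiny compared to the ~80 AU travelled, but visible in radio tracking
```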
For more information, see:
- Wikipedia, Pioneer Anomaly.
- Chris P. Duif, Pioneer Anomaly—Literature and Links.
- The Pioneer Collaboration, A Mission to Explore the Pioneer Anomaly.
Proving a version of Cosmic Censorship is a matter of mathematical physics rather than physics per se, but doing so would increase our understanding of general relativity. There are actually at least two versions: Penrose formulated the "Strong Cosmic Censorship Conjecture" in 1986 and the "Weak Cosmic Censorship Hypothesis" in 1988. Very roughly, strong cosmic censorship asserts that under reasonable conditions general relativity is a deterministic theory, while weak cosmic censorship asserts that any singularity produced by gravitational collapse is hidden behind an event horizon. Despite their names, strong cosmic censorship does not imply weak cosmic censorship.
In 1991, Preskill and Thorne made a bet against Hawking in which they claimed that weak cosmic censorship was false. Hawking conceded this bet in 1997 when a counterexample was found by Matthew Choptuik. This features finely tuned infalling matter poised right on the brink of forming a black hole. It almost creates a region from which light cannot escape—but not quite. Instead, it creates a naked singularity!
Given the delicate nature of this construction, Hawking did not give up. Instead he made a new bet, which says that weak cosmic censorship holds "generically"—that is, except for very unusual conditions that require infinitely careful fine-tuning to set up. For an overview see:
- Robert Wald, Gravitational Collapse and Cosmic Censorship.
In 1999, Demetrios Christodoulou proved that for spherically symmetric solutions of Einstein's equation coupled to a massless scalar field, weak cosmic censorship holds generically. For a review of this and also Choptuik's work, see:
- Carsten Gundlach, Critical Phenomena in Gravitational Collapse.
While spherical symmetry is a very restrictive assumption, this result is a good example of how, with plenty of work, we can make progress in rigorously settling the questions raised by general relativity.
What about strong cosmic censorship? In general relativity, for each choice of initial data—that is, each choice of the gravitational field and other fields at "time zero"—there is a region of spacetime whose properties are completely determined by this choice. The question is whether this region is always the whole universe. That is: does the present determine the whole future?
The answer is: not always! By carefully choosing the fields at time zero you can manufacture counterexamples. But Penrose, knowing this, claimed only that generically the fields at time zero determine the whole future of the universe.
In 2017, Mihalis Dafermos and Jonathan Luk showed that even this is false if you don't demand that the fields stay smooth. But perhaps the conjecture can be saved if we assume the fields stay sufficiently smooth:
- Kevin Hartnett, Mathematicians Disprove Conjecture Made to Save Black Holes.
- Oscar J.C. Dias, Harvey S. Reall and Jorge E. Santos, Strong Cosmic Censorship: Taking the Rough with the Smooth.
Besides the particles that carry forces (the photon, the W and Z bosons, and the gluons), all elementary particles we have seen so far fit neatly into three "generations" of particles called leptons and quarks. The first generation consists of:
- the electron
- the electron neutrino
- the up quark
- the down quark
The second consists of:
- the muon
- the muon neutrino
- the charmed quark
- the strange quark
and the third consists of:
- the tau
- the tau neutrino
- the top quark
- the bottom quark
How do we know there aren't more?
Ever since particle accelerators achieved the ability to create Z bosons in 1983, our best estimates on the number of generations have come from measuring the rate at which Z bosons decay into completely invisible stuff. The underlying assumption is that when this happens, the Z boson is decaying into a neutrino–antineutrino pair as predicted by the Standard Model. Each of the three known generations contains a neutrino which is very light. If this pattern holds up, the total rate of "decay into invisible stuff" should be proportional to the number of generations!
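Here is the counting in miniature; the two widths below are approximate LEP-era values assumed for the sketch, not numbers quoted in this article:

```python
# Counting light neutrino species from the Z boson's "invisible" decay width.
gamma_invisible = 499.0      # measured invisible width of the Z, in MeV (approximate)
gamma_per_neutrino = 167.2   # Standard Model width for Z -> one neutrino-antineutrino pair, in MeV

print(gamma_invisible / gamma_per_neutrino)   # ~2.98, consistent with exactly three light neutrino species
```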
Experiments like this keep indicating there are three generations of this sort. So, most physicists feel sure there are exactly three generations of quarks and leptons. The question then becomes "why?"—and so far we haven't a clue!
For details see:
- D. Karlen, The number of neutrino types from collider experiments, revised 2008.
Honesty compels us to point out a slight amount of wiggle room in the remarks above. Conservation of energy prevents the Z from decaying into a neutrino–antineutrino pair if the neutrino in question is of a sort that has more than half the mass of the Z. So, if there were a fourth generation with a very heavy neutrino, we couldn't detect it by studying the decay of Z bosons. But all three known neutrinos have a mass less than 1/3000 times the Z mass, so a fourth neutrino would have to be much heavier than the rest to escape detection this way.
Another bit of wiggle room lurks in the phrase "decaying into a neutrino–antineutrino pair in the manner predicted by the Standard Model". If there were a fourth generation with a neutrino that didn't act like the other three, or no neutrino at all, we might not see it. But in this case it would be stretching language a bit to speak of a "fourth generation", since the marvelous thing about the three known generations is how they're completely identical except for the values of certain constants like masses.
If you're familiar with particle physics, you'll know it goes much deeper than this: the Standard Model says every generation of particles has precisely the same mathematical structure except for some numbers that describe Higgs couplings. We don't know any reason for this structure, although the requirement of "anomaly cancellation" puts some limits on what it can be.
If you're not an expert on particle physics, perhaps these introductions to the Standard Model will help explain things:
- Wikipedia, The Standard Model.
- M. J. Herrero, The Standard Model.
The second is much more detailed and technical than the first.
Starting in the 1990s, our understanding of neutrinos has dramatically improved, and the puzzle of why we see about 1/3 as many electron neutrinos coming from the sun as naively expected has pretty much been answered: the three different flavors of neutrino—electron, muon and tau—turn into each other, because these flavors are not the same as the three "mass eigenstates", which have a definite mass. But the wide variety of neutrino experiments over the last thirty years has opened up other puzzles.
For example, we don't know the origin of the neutrinos' masses. Do the observed left-handed neutrinos get their mass by coupling to the Higgs and a right-handed partner, the way the quarks and charged leptons do? This would require the existence of so-far unseen right-handed neutrinos. Do they get their mass by coupling to themselves? This could happen if they are "Majorana fermions": that is, their own antiparticles. They could also get a mass in other, even more exciting ways, like the "see-saw mechanism". This requires them to couple to a very massive right-handed particle, and could explain their very light masses.
Even what we've actually observed raises puzzles. With many experiments going on, there are often "anomalies", but many of these go away after more careful study. Here's a challenge that won't just go away with better data: the 3×3 matrix relating the 3 flavors of neutrino to the 3 neutrino mass eigenstates, called the Pontecorvo–Maki–Nakagawa–Sakata matrix, is much further from the identity matrix than the analogous matrix for quarks, called the Cabibbo–Kobayashi–Maskawa matrix. In simple terms, this means that each of the three flavors of neutrino is a big mix of different masses. Nobody knows why these matrices take the values they do, or why they're so different from each other.
For details, try:
- John Baez, Neutrinos and the Mysterious Pontecorvo–Maki–Nakagawa–Sakata Matrix.
- Rabindra N. Mohapatra, Physics of Neutrino Mass.
- A. Baha Balantekin and Boris Kayser, On the Properties of Neutrinos.
- M. C. Gonzalez-Garcia and M. Yokoya, Neutrino Masses, Mixing, and Oscillations.
- The Neutrino Oscillation Industry.
The last of these has lots of links to the web pages of research groups doing experiments on neutrinos. It's indeed a big industry!
Violation of P symmetry, meaning the symmetry between left and right, is strongly visible in the Standard Model: for example, all directly observed neutrinos are "left handed". But violation of CP symmetry is subtler: in the Standard Model it appears solely in interactions between the Higgs boson and quarks or leptons. Technically, it occurs because the numbers in the Cabibbo–Kobayashi–Maskawa matrix and Pontecorvo–Maki–Nakagawa–Sakata matrix (discussed in the previous question) are not all real numbers. Interestingly, this is only possible when there are 3 or more generations of quarks and/or leptons: with 2 or fewer generations the matrix can always be made real.
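The reason is a simple parameter count. For an n×n unitary mixing matrix, after absorbing as many phases as possible into redefinitions of the quark (or lepton) fields, one is left with n(n−1)/2 mixing angles and (n−1)(n−2)/2 irreducible complex phases. A short sketch:

```python
# Parameter counting for an n x n unitary mixing matrix (CKM or PMNS style).
def mixing_parameters(n):
    angles = n * (n - 1) // 2        # real mixing angles
    phases = (n - 1) * (n - 2) // 2  # CP-violating phases left after field redefinitions
    return angles, phases

for n in (2, 3):
    print(n, mixing_parameters(n))
# 2 generations: 1 angle, 0 phases -> the matrix can be made real, no CP violation
# 3 generations: 3 angles, 1 phase -> CP violation becomes possible
```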
Does the strong force violate CP symmetry? In the Standard Model it would be very natural to add a CP-violating term to the equations describing the strong force, proportional to a constant called the θ angle. But experiments say the magnitude of the "θ angle" is less than 2 × 10⁻¹⁰. Is this angle zero or not? Nobody knows. Why is it so small? This is called the "strong CP problem". One possible solution, called the Peccei–Quinn mechanism, involves positing a new very light particle called the axion, which might also be a form of dark matter. But despite searches, nobody has found any axions.
- Wikipedia, CP Violation.
- Wikipedia, Strong CP Problem.
- Michael Beyer, editor, CP Violation in Particle, Nuclear, and Astrophysics, Springer, Berlin, 2008.
- I. Bigi, CP Violation—An Essential Mystery in Nature's Grand Design.
The last reference is nice but a bit dated: it was written in 1997.
As of 2020, experiments show the electric dipole moment of the electron is less than 1.1 × 10⁻²⁹ electron charge centimeters. According to the Standard Model it should have a very small nonzero value due to CP violation by virtual quarks, but various extensions of the Standard Model predict a larger dipole moment.
Also as of 2020, experiments show the neutron's electric dipole moment is less than 1.8 × 10⁻²⁶ e·cm. The Standard Model predicts a moment of about 10⁻³¹ e·cm, again due to CP violation by virtual quarks, and again various other theories predict a larger moment.
Measuring these moments could give new information on physics beyond the Standard Model.
- Wikipedia, Electron Electric Dipole Moment.
- Wikipedia, Neutron Electric Dipole Moment.
- Maxim Pospelov and Adam Ritz, Electric Dipole Moments as Probes of New Physics.
Most physicists believe the answers to all these questions are "yes". There are currently a number of experiments going on to produce and detect a quark-gluon plasma. It's believed that producing such a plasma at low pressures requires a temperature of 2 million million kelvins. Since this is roughly 100,000 times hotter than the center of the Sun, and such extreme temperatures were last prevalent in our Universe only about 1 microsecond after the Big Bang, these experiments are lots of fun. The largest, the Relativistic Heavy Ion Collider on Long Island, New York, began operation in 2000. It works by slamming gold nuclei together at outrageous speeds.
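As a sanity check on that temperature, one can convert the QCD transition energy scale to kelvins; the ~170 MeV figure below is a round value from lattice QCD assumed for the sketch, not a number quoted here:

```python
# Converting the quark-gluon plasma transition energy scale to a temperature.
k_B = 8.617e-5         # Boltzmann constant, eV per kelvin
E_transition = 170e6   # ~170 MeV, assumed round value for the QCD transition

print(E_transition / k_B)   # ~2e12 K, i.e. about two million million kelvins, as quoted above
```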
But, in addition to such experimental work, a lot of high-powered theoretical work is needed to understand just what QCD predicts, both in extreme situations like these, and for ordinary matter. In fact, it's a great challenge to use QCD to predict the masses of protons, neutrons, pions and the like to an accuracy greater than about 10%. Doing so makes heavy use of supercomputers, but there are also fundamental obstacles to good numerical computations, like the "fermion doubling problem", where bright new ideas are needed.
These are questions of mathematical physics rather than physics per se, but they are important. At the turn of the millennium, the Clay Mathematics Institute offered a $1,000,000 prize for providing a mathematically rigorous foundation for the quantum version of SU(2) Yang–Mills theory in four spacetime dimensions, and proving that there's a "mass gap"—meaning that the lightest particle in this theory has nonzero mass. For details see:
- Clay Mathematics Institute, Yang–Mills and Mass Gap.
Most "grand unified theories" (GUTs) predict that the proton decays, but so far experiments have (for the most part) only put lower limits on the proton lifetime. As of 2002, the lower limit on the mean life of the proton was somewhere between 1031 and 1033 years, depending on the presumed mode of decay, or 1.6 x 1025 years regardless of the mode of decay.
Proton decay experiments are heroic undertakings, involving some truly huge apparatus. Right now the biggest one is "Super-Kamiokande". This was built in 1995, a kilometer underground in the Mozumi mine in Japan. This experiment is mainly designed to study neutrinos, but it doubles as a proton decay detector. It consists of a tank holding 50,000 tons of pure water, lined with 11,200 photomultiplier tubes which can detect very small flashes of light. Usually these flashes are produced by neutrinos and various less interesting things (the tank is deep underground to minimize the effect of cosmic rays). But, flashes of light would also be produced by certain modes of proton decay, if this ever happens.
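To see why such an enormous tank is needed, here is a back-of-envelope count; only the 50,000 tons of water comes from the description above, the rest is elementary chemistry:

```python
# Expected proton decays per year in 50,000 tonnes of water, if the proton
# lifetime were 1e33 years.  Counts every proton, in hydrogen and oxygen alike.
avogadro = 6.022e23
water_mass_g = 5e10           # 50,000 tonnes in grams
protons_per_molecule = 10     # 2 from the hydrogens + 8 from the oxygen
molar_mass_water = 18.0       # grams per mole

n_protons = water_mass_g / molar_mass_water * protons_per_molecule * avogadro
print(n_protons)          # ~1.7e34 protons
print(n_protons / 1e33)   # ~17 expected decays per year -- hence huge, quiet, well-instrumented detectors
```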
Super-Kamiokande was beginning to give much improved lower bounds on the proton lifetime, and excellent information on neutrino oscillations, when a strange disaster happened on November 12, 2001. The tank was being refilled with water after some burnt-out photomultiplier tubes had been replaced. Workmen standing on styrofoam pads on top of some of the bottom tubes made small cracks in the neck of one of the tubes, causing that tube to implode. The resulting shock wave started a chain reaction in which about 7000 of the photomultiplier tubes were destroyed! Luckily, after lots of hard work the experiment was rebuilt by December 2002.
In 2000, after about 20 years of operation, the Kolar Mine proton decay experiment claimed to have found proton decay, and their team of physicists gave an estimate of 10³¹ years for the proton lifetime. Other teams are skeptical.
For more details, try these:
- Super-Kamiokande photo album.
- The IMB Proton Decay Experiment.
- H. Adarkar et al., Experimental Evidence for G.U.T. Proton Decay.
- Jogesh C. Pati, With Grand Unification Signals in, Can Proton Decay be Far Behind?
Of course their mass in kilograms depends on an arbitrary human choice of units, but their mass ratios are fundamental constants of nature. For example, the muon is about 206.76828 times as heavy as the electron. We have no explanation of this sort of number! We attribute the masses of the elementary particles to the strength of their interaction with the Higgs boson (see above), but we have no understanding of why these interactions are as strong as they are.
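For instance, that ratio is just the quotient of two measured masses; the MeV values below are standard tabulated figures, assumed here rather than taken from this article:

```python
# The muon-to-electron mass ratio from measured rest energies.
m_muon = 105.6583745       # MeV
m_electron = 0.5109989461  # MeV

print(m_muon / m_electron)   # ~206.768 -- a number nobody can yet derive from first principles
```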
Particle masses and strengths of the fundamental forces constitute most of the 26 fundamental dimensionless constants of nature. Another one is the cosmological constant—assuming it's constant. Others govern the oscillation of neutrinos (see above). So, we can wrap a bunch of open questions into a bundle by asking: Why do these 26 dimensionless constants have the values they do?
Perhaps the answer involves the Anthropic Principle, but perhaps not. Right now, we have no way of knowing that this question has any answer at all!
For a list of these 26 dimensionless constants, try:
- John Baez, How Many Fundamental Constants Are There?
Very roughly speaking, the Anthropic Principle says that our universe must be approximately the way it is for intelligent life to exist, so that the mere fact we are asking certain questions constrains their answers. This might "explain" the values of fundamental constants of nature, and perhaps other aspects of the laws of physics as well. Or, it might not.
Different ways of making the Anthropic Principle precise, and a great deal of evidence concerning it, can be found in a book by Barrow and Tipler:
- John D. Barrow and Frank J. Tipler, The Anthropic Cosmological Principle, Oxford U. Press, Oxford, 1988.
This book started a heated debate on the merits of the Anthropic Principle, which continues to this day. Some people have argued the principle is vacuous. Others have argued that it distracts us from finding better explanations of the facts of nature, and is thus inherently unscientific. For one interesting view, see:
- Max Tegmark, The Mathematical Universe.
In 1994 Lee Smolin advocated an alternative but equally mind-boggling idea, namely that the parameters of the Universe are tuned, not to permit intelligent life, but to maximize black hole production! The mechanism he proposes for this is a kind of cosmic Darwinian evolution, based on the (unproven) theory that universes beget new baby universes via black holes. For details, see:
- Lee Smolin, The Life of the Cosmos, Crown Press, 1997.
- Lee Smolin, The Fate of Black Hole Singularities and the Parameters of the Standard Models of Particle Physics and Cosmology.
More recently, the string theorist Leonard Susskind has argued that the "string theory vacuum" which describes the laws of physics we see must be chosen using the Anthropic Principle:
- Edge: The Landscape—a talk with Leonard Susskind.
Despite a huge amount of work on string theory over the last few decades, it has still made no predictions that we can check with our particle accelerators and whose failure would falsify the theory. The closest it comes so far is by predicting the existence of a "superpartner" for each of the observed types of particle. None of these superpartners have ever been seen. It is possible that the Large Hadron Collider will detect signs of the lightest superpartner. It's also possible that dark matter is due to a superpartner! But, these remain open questions.
It's also interesting to see what string theorists regard as the biggest open questions in physics. At the turn of the millennium, the participants of the conference Strings 2000 voted on the ten most important physics problems. Here they are:
- Are all the (measurable) dimensionless parameters that characterize the physical universe calculable in principle or are some merely determined by historical or quantum mechanical accident and incalculable?
- How can quantum gravity help explain the origin of the universe?
- What is the lifetime of the proton and how do we understand it?
- Is Nature supersymmetric, and if so, how is supersymmetry broken?
- Why does the universe appear to have one time and three space dimensions?
- Why does the cosmological constant have the value that it has, is it zero and is it really constant?
- What are the fundamental degrees of freedom of M-theory (the theory whose low-energy limit is eleven-dimensional supergravity and which subsumes the five consistent superstring theories) and does the theory describe Nature?
- What is the resolution of the black hole information paradox?
- What physics explains the enormous disparity between the gravitational scale and the typical mass scale of the elementary particles?
- Can we quantitatively understand quark and gluon confinement in Quantum Chromodynamics and the existence of a mass gap?
For details see:
- Strings 2000, Physics Problems for the Next Millennium.
This last question sits on the fence between cosmology and particle physics:
The answer to this question will necessarily rely upon, and at the same time may be a large part of, the answers to many of the other questions above.